Stien Mommen, David Matischek, Jeroen Timmerman, Lexi Li, & Emma Akrong
March 29th, 2025
Process of choosing between N alternatives: N racing evidence accumulators
Assumptions:
Lexical decision task: word vs. non-word
Participants: 4
Trials: 10,000
Conditions: Speed vs. Accuracy (between-subjects design)
Double response (DR) implementation:
Additional information beyond response choice and RT
Better understanding of the decision-making process as a whole
Determine if including DR will make the model learn more
MCMC: more computationally costly
BayesFlow: training is costly, but inference is fast
RDM (base model for all)
change in evidence
drift rate of choice
difference in time
scale of within-trial noise for choice
random variable
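The terms listed above correspond to the pieces of the standard racing-diffusion update rule; as a sketch (symbol names assumed, not taken from the slides):

$$dx_i = v_i \, dt + s \, \sqrt{dt} \, \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}(0, 1)$$

where $dx_i$ is the change in evidence for accumulator $i$, $v_i$ its drift rate, $dt$ the time step, $s$ the within-trial noise scale, and $\epsilon_i$ the random variable.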
Feed-forward inhibition (FFI)
amount of inhibition
change in evidence
drift rate of choice
difference in time
scale of within-trial noise for choice
random variable
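Under feed-forward inhibition, each accumulator is additionally inhibited in proportion to the inputs driving the other accumulators; a common way to write this (symbols assumed):

$$dx_i = \Big(v_i - \beta \sum_{j \neq i} v_j\Big)\, dt + s \, \sqrt{dt} \, \epsilon_i$$

where $\beta$ is the amount of inhibition and the remaining terms match the base racing-diffusion update.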
Leaky-competing accumulator (LCA)
leakage rate
amount of inhibition
change in evidence
drift rate of choice
difference in time
scale of within-trial noise for choice
random variable
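The leaky-competing accumulator combines leakage on each accumulator's own evidence with lateral inhibition from the others' evidence; in the usual notation (symbols assumed):

$$dx_i = \Big(v_i - k\, x_i - \beta \sum_{j \neq i} x_j\Big)\, dt + s \, \sqrt{dt} \, \epsilon_i$$

with $k$ the leakage rate, $\beta$ the amount of inhibition, and evidence $x_i$ typically truncated at zero.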
```python
import numba as nb
import numpy as np


@nb.jit(nopython=True, cache=True)
def trial(drift, starting_point, boundary, ndt, max_t, max_drt=0.25, s=1, dt=None):
    drift = np.asarray(drift, dtype=np.float64)  # ensure a float array inside the JIT body
    response = -1
    rt = -1.0
    if dt is None:
        dt = max_t / 10_000.0  # float division for the default step size
    t = 0.0
    start = float(starting_point)
    evidence = np.random.uniform(0, start, len(drift))  # start-point variability
    boundary += start  # shift boundary to match the start-point offset
    dr = False  # double-response flag
    drt = 0.0  # time elapsed in the double-response window

    # Initial evidence accumulation: race until one accumulator crosses the boundary
    while np.all(evidence < boundary) and t < max_t:
        for resp in range(len(drift)):
            evidence[resp] += dt * drift[resp] + np.random.normal(0.0, s * np.sqrt(dt))
        t += dt
    rt = t + ndt

    winners = np.where(evidence > boundary)[0]  # index array instead of a tuple
    response = winners[0] if winners.size > 0 else -1  # first crosser, or -1 if none

    # Double-response window: the losing accumulators keep racing for max_drt
    while drt < max_drt and not dr:
        for resp in range(len(drift)):
            if response != -1 and resp != response:
                evidence[resp] += dt * drift[resp] + np.random.normal(0.0, s * np.sqrt(dt))
                if evidence[resp] >= boundary:
                    dr = True
                    break
        drt += dt

    return rt, response, dr, drt
```
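To show how a trial simulator like this can be driven in practice, here is a minimal pure-NumPy sketch (no numba; the function name `rdm_trial` and all parameter values are made up for illustration) that runs many trials and estimates the double-response rate:

```python
import numpy as np


def rdm_trial(drift, boundary, ndt, s=1.0, dt=0.001, max_t=10.0, max_drt=0.25, rng=None):
    """One racing-diffusion trial with a double-response window (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    drift = np.asarray(drift, dtype=np.float64)
    evidence = np.zeros(len(drift))
    t = 0.0
    # Race until one accumulator crosses the boundary
    while np.all(evidence < boundary) and t < max_t:
        evidence += drift * dt + rng.normal(0.0, s * np.sqrt(dt), len(drift))
        t += dt
    winners = np.flatnonzero(evidence >= boundary)
    response = int(winners[0]) if winners.size > 0 else -1
    # Double-response window: losers keep accumulating for up to max_drt
    dr, drt = False, 0.0
    while drt < max_drt and not dr and response != -1:
        for i in range(len(drift)):
            if i != response:
                evidence[i] += drift[i] * dt + rng.normal(0.0, s * np.sqrt(dt))
                if evidence[i] >= boundary:
                    dr = True
        drt += dt
    return t + ndt, response, dr, drt


rng = np.random.default_rng(0)
results = [rdm_trial([2.0, 1.0], boundary=1.0, ndt=0.3, rng=rng) for _ in range(200)]
dr_rate = np.mean([r[2] for r in results])
print("double-response rate:", dr_rate)
```

The double-response rate estimated this way is one of the extra data channels, beyond choice and RT, that the DR models are fit to.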
Adding double responses did, to some extent, lead the model to learn more:
RDM
FFI
LCA
It also somewhat improved posterior contraction:
RDM
FFI
LCA
Priors can be more grounded in theory
More participants, more double responses
Accuracy condition, Participant 2
| Trial | Initial Response | First RT (s) | Double Response | Double Response RT (s) |
|---|---|---|---|---|
| 6661 | FALSE | 0.4852 | TRUE | 0.0458 |
| 7372 | FALSE | 0.4194 | TRUE | 0.0540 |
| 9323 | FALSE | 0.4851 | TRUE | 0.0700 |
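As a quick check on the rows shown, the mean double-response RT across these three trials is:

```python
# Double-response RTs from the three trials shown above (seconds)
drts = [0.0458, 0.0540, 0.0700]
mean_drt = sum(drts) / len(drts)
print(round(mean_drt, 4))  # → 0.0566
```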
Study design with explicit double responses
Alternative definition of double response: the loser's drift overtakes the winner's drift